
    SEAI: Social Emotional Artificial Intelligence Based on Damasio's Theory of Mind

    A socially intelligent robot must be capable of extracting meaningful information from the social environment in real time and reacting to it with coherent, human-like behaviour. Moreover, it should be able to internalise this information, reason on it at a higher level of abstraction, build its own opinions independently, and then automatically bias its decision-making according to its unique experience. In recent decades, neuroscience research has highlighted the link between the evolution of such complex behaviour and the evolution of a certain level of consciousness, which cannot be decoupled from a body that feels emotions as discriminants and prompters. In order to develop cognitive systems for social robotics with greater human-likeness, we used an "understanding by building" approach to model and implement a well-known theory of mind in the form of an artificial intelligence, and we tested it on a sophisticated robotic platform. The presented system is named SEAI (Social Emotional Artificial Intelligence), a cognitive system specifically conceived for social and emotional robots. It is designed as a bio-inspired, highly modular, hybrid system with emotion modelling and high-level reasoning capabilities. It follows the deliberative/reactive paradigm, in which a knowledge-based expert system handles the high-level symbolic reasoning, while a more conventional reactive layer is in charge of low-level processing and control. The SEAI system is also enriched by a model that simulates Damasio's theory of consciousness and the theory of Somatic Markers. After a review of similar bio-inspired cognitive systems, we present the scientific foundations of the SEAI framework and their computational formalisation. Then, a more detailed technical description of the architecture is given, underlining the numerous parallels with the human cognitive system. Finally, the influence of artificial emotions and feelings, and their link with the robot's beliefs and decisions, is tested on a physical humanoid involved in Human-Robot Interaction (HRI).
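
    To make the deliberative/reactive split and the somatic-marker bias concrete, here is a minimal Python sketch of such a loop. All class names, scores, and the update rule are hypothetical illustrations of the general idea, not the SEAI implementation.

```python
# Minimal, hypothetical sketch of a hybrid deliberative/reactive loop with
# somatic-marker-style biasing, loosely following the SEAI description.
# All class and method names here are illustrative, not the SEAI API.

from dataclasses import dataclass, field


@dataclass
class Percept:
    stimulus: str          # e.g. "smiling_face", "loud_noise"
    intensity: float       # normalized 0..1


@dataclass
class SomaticMarkers:
    """Valence associated with past outcomes of each action (Damasio-style bias)."""
    valence: dict = field(default_factory=dict)   # action -> running average in [-1, 1]

    def bias(self, action: str) -> float:
        return self.valence.get(action, 0.0)

    def update(self, action: str, outcome: float, rate: float = 0.2) -> None:
        old = self.valence.get(action, 0.0)
        self.valence[action] = (1 - rate) * old + rate * outcome


def reactive_layer(percept: Percept) -> dict:
    """Low-level processing: map raw stimuli to candidate reactions with scores."""
    if percept.stimulus == "smiling_face":
        return {"approach": 0.8 * percept.intensity, "greet": 0.6}
    if percept.stimulus == "loud_noise":
        return {"startle": 0.9 * percept.intensity, "orient": 0.5}
    return {"idle": 0.1}


def deliberative_layer(candidates: dict, markers: SomaticMarkers) -> str:
    """High-level decision: re-rank reactive candidates by learned emotional bias."""
    return max(candidates, key=lambda a: candidates[a] + markers.bias(a))


markers = SomaticMarkers()
action = deliberative_layer(reactive_layer(Percept("smiling_face", 0.9)), markers)
markers.update(action, outcome=+1.0)   # a positive interaction reinforces the choice
print(action)
```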

    Increasing the energy efficiency of a steam generation plant by recovering energy from condensate return lines

    This work was developed around a real system for the generation, distribution, and use of thermal energy in the form of steam, with the aim of increasing the energy efficiency of a process plant in the interior of the State of Minas Gerais. A preliminary study quantified the amount of flash steam vented in the existing plant before the implementation of the new system. Two situations were observed: in the first, part of the condensate produced by the indirect use of steam across the whole factory was lost; in the second, no condensate was reused at all. After analysing the initial design, the relevant modifications were made so that the condensate from the indirect use of steam throughout the industrial plant is fully reused, thereby reducing the amount of fuel consumed by the steam generator. Finally, a quantitative analysis of energy and financial resources was carried out to verify the effectiveness of the new process, which is currently in operation at the plant in question.
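
    As a rough illustration of the quantity the preliminary study measured, the sketch below estimates the flash-steam fraction produced when hot condensate is let down to a lower pressure. The pressures and steam-table enthalpies are assumed example values, not figures from the plant in question.

```python
# Hypothetical back-of-the-envelope estimate of the fraction of condensate that
# flashes to steam on pressure letdown. Steam-table values below are assumed
# for illustration only, not taken from the study.

def flash_steam_fraction(h_f_high: float, h_f_low: float, h_fg_low: float) -> float:
    """Mass fraction of condensate that flashes to steam on pressure letdown.

    h_f_high : saturated-liquid enthalpy at the upstream pressure [kJ/kg]
    h_f_low  : saturated-liquid enthalpy at the downstream pressure [kJ/kg]
    h_fg_low : latent heat of vaporization at the downstream pressure [kJ/kg]
    """
    return (h_f_high - h_f_low) / h_fg_low


# Example: condensate at ~10 bar(a) let down to atmospheric pressure.
x = flash_steam_fraction(h_f_high=762.8, h_f_low=417.4, h_fg_low=2257.5)
print(f"Flash steam fraction: {x:.1%}")   # roughly 15% of the condensate mass
```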

    Feasibility Study and Design of a Wearable System-on-a-Chip Pulse Radar for Contactless Cardiopulmonary Monitoring

    A new system-on-a-chip radar sensor for next-generation wearable wireless interfaces applied to human health care and safety is presented. An overview of the system is provided and a feasibility study of the radar sensor is presented. In detail, the overall system consists of a radar sensor for detecting the heart and breath rates and a low-power IEEE 802.15.4 ZigBee radio interface, which provides a wireless data link with remote data acquisition and control units. In particular, the pulse radar exploits 3.1–10.6 GHz ultra-wideband signals, which allow a significant reduction of the transceiver complexity and thus of its power consumption. The operating principle of the radar for cardiopulmonary monitoring is highlighted and the results of the system analysis are reported. Moreover, the results obtained from the building-block design, the channel measurement, and the ultra-wideband antenna realization are reported.
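
    As an illustration of the operating principle, the following sketch shows one way breathing and heart rates could be extracted from the chest-displacement signal returned by such a radar, by peak-picking in the respiratory and cardiac frequency bands. The sampling rate, band limits, and synthetic signal are assumptions, not the paper's processing chain.

```python
# Illustrative sketch (not the paper's signal chain): estimate breathing and heart
# rates from a radar-derived chest-displacement signal via spectral peak picking.

import numpy as np

fs = 50.0                                   # slow-time sampling rate [Hz], assumed
t = np.arange(0, 30, 1 / fs)                # 30 s observation window
# Synthetic chest displacement: breathing (~0.25 Hz) + heartbeat (~1.2 Hz) + noise
x = 4.0 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)
x += 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
freqs = np.fft.rfftfreq(x.size, 1 / fs)

def peak_in_band(f_lo: float, f_hi: float) -> float:
    """Return the frequency of the strongest spectral peak inside [f_lo, f_hi]."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(spectrum[band])]

breath_rate = peak_in_band(0.1, 0.5) * 60   # breaths per minute
heart_rate = peak_in_band(0.8, 2.0) * 60    # beats per minute
print(f"Breathing: {breath_rate:.0f}/min, Heart: {heart_rate:.0f}/min")
```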

    The influence of dynamics and speech on understanding humanoid facial expressions

    Human communication relies mostly on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to mutually judge each other's emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. It has therefore been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few such studies have been performed on realistic humanoid robots. This experimental work aimed to demonstrate the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influence the perception of its facial expressions. The first part of the experiment studied the recognition of two kinds of stimuli related to the six basic expressions (i.e. anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli in which motion and vocalization were combined. This hypothesis was also investigated with a three-dimensional replica of the physical robot, demonstrating that even for a virtual avatar, dynamics and vocalization improve the capability to convey emotion.

    Coexistence of amplitude and frequency modulations in intracellular calcium dynamics

    The complex dynamics of intracellular calcium regulates cellular responses to information encoded in extracellular signals. Here, we study the encoding of these external signals in the context of the Li-Rinzel model. We show that, by controlling biophysical parameters, the information can be encoded in amplitude modulation, frequency modulation, or mixed (AM and FM) modulation. We briefly discuss the possible implications of this new role of information encoding for astrocytes.
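
    For reference, the following sketch integrates the Li-Rinzel model in a standard two-variable formulation (cytosolic calcium and the IP3-receptor gating variable), using commonly quoted parameter values from the De Young-Keizer/Li-Rinzel literature; the exact parameter set and stimulation protocol of the paper may differ.

```python
# Minimal sketch of the Li-Rinzel calcium model (standard formulation, not
# necessarily the paper's exact parameters). Concentrations in uM, rates in 1/s.

import numpy as np

c0, c1 = 2.0, 0.185          # total free Ca, ER/cytosol volume ratio
v1, v2, v3 = 6.0, 0.11, 0.9  # channel, leak, and SERCA pump rates
k3 = 0.1
d1, d2, d3, d5 = 0.13, 1.049, 0.9434, 0.08234
a2 = 0.2
ip3 = 0.4                    # IP3 level encoding the external signal (assumed)

def derivatives(ca, h):
    ca_er = (c0 - ca) / c1                       # ER calcium from conservation
    m_inf = ip3 / (ip3 + d1)
    n_inf = ca / (ca + d5)
    j_chan = c1 * v1 * (m_inf * n_inf * h) ** 3 * (ca_er - ca)
    j_leak = c1 * v2 * (ca_er - ca)
    j_pump = v3 * ca**2 / (ca**2 + k3**2)
    q2 = d2 * (ip3 + d1) / (ip3 + d3)
    dh = a2 * (q2 * (1 - h) - ca * h)            # IP3R inactivation gate
    return j_chan + j_leak - j_pump, dh

# Forward-Euler integration of 200 s of dynamics
dt, steps = 1e-3, 200_000
ca, h = 0.1, 0.7
trace = np.empty(steps)
for i in range(steps):
    dca, dh = derivatives(ca, h)
    ca, h = ca + dt * dca, h + dt * dh
    trace[i] = ca
# Varying `ip3` and the channel/pump rates shifts the oscillation between
# amplitude-modulated and frequency-modulated regimes, the control discussed above.
```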

    Can a Humanoid Face be Expressive? A Psychophysiological Investigation

    Non-verbal signals expressed through body language play a crucial role in multi-modal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs of innate emotional cues. A human face conveys important information in social interactions and helps us to better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both aesthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and intuitively perceivable by humans as not different from themselves. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot have been compared with corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants' psychophysiological responses were explored to investigate whether there could be differences induced by interpreting robot rather than human emotional stimuli. Preliminary results show a trend towards better recognition of expressions performed by the robot than of 2D photos or 3D models. Moreover, no significant differences in the subjects' psychophysiological state were found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models.

    A Multimodal Perception Framework for Users Emotional State Assessment in Social Robotics

    In this work, we present an unobtrusive and non-invasive perception framework based on the synergy between two main acquisition systems: the Touch-Me Pad, consisting of two electronic patches for physiological signal extraction and processing, and the Scene Analyzer, a visual-auditory perception system specifically designed for the detection of social and emotional cues. We explain how the information extracted by this kind of framework is particularly suitable for social robotics applications and how the system has been conceived for use in human-robot interaction scenarios.
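
    As a purely illustrative sketch of how cues from the two subsystems might be combined into a single user-state estimate, the code below maps a physiological arousal score and an audio-visual valence score onto a two-dimensional affect space. The names, features, and weights are assumptions and do not reflect the Touch-Me Pad or Scene Analyzer interfaces.

```python
# Hypothetical fusion of physiological and audio-visual cues into a
# (valence, arousal) estimate; all thresholds and weights are assumed.

from dataclasses import dataclass


@dataclass
class PhysiologicalSample:
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens


@dataclass
class SceneCues:
    smile_probability: float   # 0..1, from facial analysis
    voice_pitch_ratio: float   # pitch relative to the speaker's baseline


def arousal(sample: PhysiologicalSample, hr_rest: float = 70.0) -> float:
    """Crude arousal index in [-1, 1] from deviations above resting values."""
    score = 0.02 * (sample.heart_rate - hr_rest) + 0.1 * (sample.skin_conductance - 2.0)
    return max(-1.0, min(1.0, score))


def valence(cues: SceneCues) -> float:
    """Crude valence index in [-1, 1] from smiling and vocal prosody."""
    score = 1.5 * (cues.smile_probability - 0.5) + 0.5 * (cues.voice_pitch_ratio - 1.0)
    return max(-1.0, min(1.0, score))


state = (valence(SceneCues(0.8, 1.1)), arousal(PhysiologicalSample(85, 4.0)))
print(f"(valence, arousal) = ({state[0]:.2f}, {state[1]:.2f})")
```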

    Wearable Textile Platform for Assessing Stroke Patient Treatment in Daily Life Conditions

    Monitoring physical activities during post-stroke rehabilitation in daily life may help physicians to optimize and tailor the training program for patients. The European research project INTERACTION (FP7-ICT-2011-7-287351) evaluated motor capabilities in stroke patients during the recovery treatment period. We developed a wearable sensing platform based on sensor fusion among inertial sensors, knitted piezoresistive sensors, and textile EMG electrodes. The device was conceived in modular form and consists of a separate shirt, trousers, glove, and shoe. Thanks to the novel fusion approach, it has been possible to develop a shoulder model that takes into account the scapulo-thoracic joint of the scapular girdle, considerably improving the estimation of hand position in reaching activities. In order to minimize the sensor set used to monitor gait, a single inertial sensor fused with a textile goniometer proved sufficient to reconstruct the orientation of all the body segments of the leg. Finally, the sensing glove, equipped with three textile goniometers and three force sensors, showed good capabilities in reconstructing grasping activities and evaluating the interaction of the hand with the environment, in accordance with the project specifications. This paper reports on the design and the technical evaluation of the performance of the sensing platform, tested on healthy subjects.
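
    To illustrate the kind of minimal sensor fusion described for the leg, the sketch below blends a thigh-mounted inertial sensor with a textile goniometer across the knee using a simple complementary filter. The signal model, noise levels, and filter gain are assumed; this is not the algorithm used in the INTERACTION platform.

```python
# Illustrative sketch (not the INTERACTION algorithm): fuse one thigh IMU with a
# knee goniometer to recover thigh and shank pitch in the sagittal plane.

import numpy as np

def complementary_filter(gyro_rate, accel_pitch, dt=0.01, alpha=0.98, pitch0=0.0):
    """Estimate thigh pitch by blending integrated gyro rate with accelerometer pitch."""
    pitch = pitch0
    out = np.empty(len(gyro_rate))
    for i, (w, a) in enumerate(zip(gyro_rate, accel_pitch)):
        pitch = alpha * (pitch + w * dt) + (1 - alpha) * a
        out[i] = pitch
    return out

# Synthetic 2 s of walking-like data (assumed), sampled at 100 Hz
t = np.arange(0, 2, 0.01)
true_thigh = 20 * np.sin(2 * np.pi * 1.0 * t)                  # thigh pitch [deg]
gyro = np.gradient(true_thigh, t) + np.random.randn(t.size)    # noisy rate [deg/s]
accel = true_thigh + 2 * np.random.randn(t.size)               # noisy inclination [deg]
knee = 30 * (1 + np.sin(2 * np.pi * 1.0 * t - np.pi / 2))      # goniometer angle [deg]

thigh_pitch = complementary_filter(gyro, accel)
shank_pitch = thigh_pitch - knee      # chain the joint angle onto the thigh estimate
print(shank_pitch[:5])
```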